    Robust Counterparts of Inequalities Containing Sums of Maxima of Linear Functions

    This paper addresses the robust counterparts of optimization problems containing sums of maxima of linear functions and proposes several reformulations. This class includes many practical problems, e.g. problems with sums of absolute values, and arises when taking the robust counterpart of a linear inequality that is affine in the decision variables, affine in a parameter with box uncertainty, and affine in a parameter with general uncertainty. In the literature, the reformulation that is exact when there is no uncertainty is often used; in robust optimization, however, this reformulation gives an inferior solution and an overly pessimistic view. We observe that in many papers this conservatism is not mentioned. Some papers have recognized the problem, but the existing solutions are either too conservative, or their performance for different uncertainty regions is not known, a comparison between them is not available, and they are restricted to specific problems. We provide techniques for general problems and compare them on numerical examples in inventory management, regression and brachytherapy. Based on these examples, we give tractable recommendations for reducing the conservatism.

    Keywords: robust optimization; sum of maxima of linear functions; biaffine uncertainty; robust conic quadratic constraints
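
    As a hedged toy illustration of this conservatism (our own example, not the paper's reformulation): for f(x, z) = |x - z| + |x + z| with box uncertainty |z| <= 1, the true worst case over z is max(2, 2|x|), while robustifying each maximum term separately yields the larger bound 2|x| + 2, because the two terms are each allowed to pick their own worst-case value of the shared parameter z. The Python sketch below shows the gap.

        import numpy as np

        def true_worst_case(x, zs):
            # Worst case of the *sum* over a shared uncertain z in [-1, 1].
            return max(abs(x - z) + abs(x + z) for z in zs)

        def per_term_bound(x, zs):
            # Robustifying each |.| term separately lets each term pick its
            # own worst z, which no single realization of z attains jointly.
            return max(abs(x - z) for z in zs) + max(abs(x + z) for z in zs)

        zs = np.linspace(-1.0, 1.0, 1001)  # discretized box uncertainty set
        for x in [0.0, 0.5, 2.0]:
            print(x, true_worst_case(x, zs), per_term_bound(x, zs))
        # At x = 0.5: true worst case 2.0, per-term bound 3.0 -- conservative.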

    Kriging Models That Are Robust With Respect to Simulation Errors

    In the field of the Design and Analysis of Computer Experiments (DACE), meta-models are used to approximate time-consuming simulations. These simulations often contain simulation-model errors in the output variables. In the construction of meta-models, these errors are often ignored, yet they may be magnified by the meta-model. Therefore, in this paper, we study the construction of Kriging models that are robust with respect to simulation-model errors. We introduce a robustness criterion to quantify the robustness of a Kriging model. Based on this criterion, two new methods to find robust Kriging models are introduced. We illustrate these methods with the approximation of the six-hump camel back function and a real-life example. Furthermore, we validate the two methods by simulating artificial perturbations. Finally, we consider the influence of the Design of Computer Experiments (DoCE) on the robustness of Kriging models.

    Keywords: Kriging; robustness; simulation-model error
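
    The paper's robustness criterion and methods are its own; as a loosely related, hedged sketch (assuming scikit-learn, which is not used in the paper), the snippet below fits a Kriging (Gaussian process) model whose nugget term, the alpha parameter, absorbs simulation-model error instead of interpolating the noisy outputs exactly.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        rng = np.random.default_rng(0)
        X = np.linspace(0.0, 1.0, 10).reshape(-1, 1)         # design points
        y_true = np.sin(2 * np.pi * X).ravel()
        y = y_true + rng.normal(scale=0.1, size=X.shape[0])  # simulation-model error

        # alpha adds a diagonal (nugget) to the kernel matrix, so the fitted
        # surface no longer interpolates the noisy outputs exactly.
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=0.1**2)
        gp.fit(X, y)

        x0 = np.array([[0.25]])
        mean, std = gp.predict(x0, return_std=True)
        print(mean, std)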

    Safe Approximations of Chance Constraints Using Historical Data

    This paper proposes a new way to construct uncertainty sets for robust optimization. Our approach uses the available historical data for the uncertain parameters and is based on goodness-of-fit statistics. It guarantees that the probability that the uncertain constraint holds is at least the prescribed value. Compared to existing safe approximation methods for chance constraints, our approach directly uses the historical-data information and leads to tighter uncertainty sets and therefore to better objective values. This improvement is especially significant when the number of uncertain parameters is low. Other advantages of our approach are that it can handle joint chance constraints easily, it can deal with uncertain parameters that are dependent, and it can be extended to nonlinear inequalities. Several numerical examples illustrate the validity of our approach.

    Keywords: robust optimization; chance constraint; phi-divergence; goodness-of-fit statistics
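
    In the spirit of the phi-divergence keyword (a minimal sketch under our own assumptions, not the paper's exact construction): given N historical observations of a discrete uncertain parameter, one can build a Pearson chi-square confidence set around the empirical frequencies, calibrated by a goodness-of-fit quantile, and compute a worst-case expected loss over that set.

        import numpy as np
        from scipy.stats import chi2
        from scipy.optimize import minimize

        # N historical observations of a parameter taking m = 3 values.
        counts = np.array([48, 37, 15])
        N, m = counts.sum(), counts.size
        p_hat = counts / N

        # Pearson chi-square ball {p : sum (p_i - p_hat_i)^2 / p_hat_i <= rho},
        # with rho calibrated by the chi-square goodness-of-fit quantile.
        rho = chi2.ppf(0.95, df=m - 1) / N

        loss = np.array([1.0, 2.0, 5.0])  # loss under each scenario

        res = minimize(
            lambda p: -loss @ p,  # maximize worst-case expected loss
            p_hat,
            constraints=[
                {"type": "eq", "fun": lambda p: p.sum() - 1.0},
                {"type": "ineq", "fun": lambda p: rho - np.sum((p - p_hat) ** 2 / p_hat)},
            ],
            bounds=[(0.0, 1.0)] * m,
            method="SLSQP",
        )
        print(p_hat @ loss, -res.fun)  # nominal vs. worst-case expected loss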

    On Markov Chains with Uncertain Data

    In this paper, a general method is described to determine uncertainty intervals for performance measures of Markov chains given an uncertainty region for the parameters of the Markov chains. We investigate the effects of uncertainties in the transition probabilities on the limiting distributions, on the state probabilities after n steps, on mean sojourn times in transient states, and on absorption probabilities for absorbing states. We show that the uncertainty effects can be calculated by solving linear programming problems in the case of interval uncertainty for the transition probabilities, and by second-order cone optimization in the case of ellipsoidal uncertainty. Many examples are given, especially Markovian queueing examples, to illustrate the theory.

    Keywords: Markov chain; interval uncertainty; ellipsoidal uncertainty; linear programming; second-order cone optimization
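
    As a hedged toy illustration (brute force over the extreme transition matrices of a two-state chain, rather than the paper's linear programming formulation), the sketch below computes uncertainty intervals for the limiting distribution under interval uncertainty on the transition probabilities.

        import itertools
        import numpy as np

        def stationary(P):
            # Solve pi P = pi, sum(pi) = 1 via the eigenvector for eigenvalue 1.
            vals, vecs = np.linalg.eig(P.T)
            pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
            return pi / pi.sum()

        # Two-state chain: row i is [1 - q_i, q_i]; each q_i lies in an interval.
        intervals = [(0.2, 0.3), (0.5, 0.7)]

        pis = []
        for q0, q1 in itertools.product(*intervals):
            P = np.array([[1 - q0, q0], [q1, 1 - q1]])
            pis.append(stationary(P))
        pis = np.array(pis)
        print(pis.min(axis=0), pis.max(axis=0))  # uncertainty interval per state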

    Immunizing Conic Quadratic Optimization Problems Against Implementation Errors

    We show that the robust counterpart of a convex quadratic constraint with ellipsoidal implementation error is equivalent to a system of conic quadratic constraints. To prove this result, we first derive a sharper result for the S-lemma in case the two matrices involved can be simultaneously diagonalized. This extension of the S-lemma may also be useful for other purposes. We extend the result to the case in which the uncertainty region is the intersection of two convex quadratic inequalities. The robust counterpart for this case is also equivalent to a system of conic quadratic constraints. Results for convex conic quadratic constraints with implementation error are also given. We conclude by showing how the theory developed can be applied in robust linear optimization with jointly uncertain parameters and implementation errors, in sequential robust quadratic programming, in Taguchi’s robust approach, and in the adjustable robust counterpart.

    Keywords: conic quadratic programming; hidden convexity; implementation error; robust optimization; simultaneous diagonalizability; S-lemma
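
    For intuition, here is a hedged sketch of the simplest special case (a linear constraint, not the paper's conic quadratic result): a^T (x + delta) <= b for all ||delta||_2 <= rho is equivalent to a^T x + rho ||a||_2 <= b, which the snippet checks by sampling implementation errors on the sphere.

        import numpy as np

        rng = np.random.default_rng(1)
        a, b, rho = np.array([1.0, -2.0, 0.5]), 3.0, 0.4
        x = np.array([0.3, 0.2, -1.0])

        # Robust counterpart of a @ (x + delta) <= b over the ball ||delta|| <= rho.
        robust_ok = a @ x + rho * np.linalg.norm(a) <= b
        print("robust counterpart holds:", robust_ok)

        # Monte Carlo check: no sampled implementation error violates the constraint.
        deltas = rng.normal(size=(10000, 3))
        deltas *= rho / np.linalg.norm(deltas, axis=1, keepdims=True)
        print("worst sampled value:", ((x + deltas) @ a).max(), "<=", b)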

    The Effect of Transformations on the Approximation of Univariate (Convex) Functions with Applications to Pareto Curves

    In the literature, methods have been proposed for the construction of piecewise linear upper and lower bounds for the approximation of univariate convex functions. We study the effect of increasing convex or increasing concave transformations on the approximation of univariate (convex) functions. In this paper, we show that these transformations can be used to construct upper and lower bounds for nonconvex functions. Moreover, we show that by using such transformations of the input variable or the output variable, we obtain tighter upper and lower bounds for the approximation of convex functions than without these transformations. We show that these transformations can be applied to the approximation of a (convex) Pareto curve that is associated with a (convex) bi-objective optimization problem.

    Keywords: approximation theory; convexity; convex/concave transformation; Pareto curve
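
    A hedged sketch of the idea on our own toy example (not one from the paper): piecewise linear interpolation of the convex function f(x) = exp(x) overestimates it between breakpoints, whereas interpolating the transformed output g = log f, which here is linear, and mapping back is exact.

        import numpy as np

        f = np.exp
        breakpoints = np.linspace(0.0, 2.0, 5)
        x = np.linspace(0.0, 2.0, 401)

        # Direct piecewise linear interpolation: an upper bound for convex f.
        direct = np.interp(x, breakpoints, f(breakpoints))

        # Transform the output with log (increasing concave), interpolate,
        # then map back with exp. Here log(exp(x)) = x is linear, so the
        # transformed approximation is exact up to rounding.
        transformed = np.exp(np.interp(x, breakpoints, np.log(f(breakpoints))))

        print("max error, direct:     ", np.max(direct - f(x)))
        print("max error, transformed:", np.max(np.abs(transformed - f(x))))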

    The Correct Kriging Variance Estimated by Bootstrapping

    The classic Kriging variance formula is widely used in geostatistics and in the design and analysis of computer experiments. This paper proves that this formula is wrong. Furthermore, it shows that the formula underestimates the Kriging variance in expectation. The paper develops parametric bootstrapping to estimate the Kriging variance. The new method is tested on several artificial examples and a real-life case study. These results demonstrate that the classic formula underestimates the true Kriging variance.

    Keywords: Kriging; Kriging variance; bootstrapping; design and analysis of computer experiments (DACE); Monte Carlo; global optimization; black-box optimization
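
    The sketch below is a simplified parametric bootstrap in the spirit of the abstract (the paper's exact resampling scheme may differ): draw fresh response vectors from a Gaussian process prior with the fitted hyperparameters, refit on each draw, and compare the resulting squared prediction errors with the plug-in Kriging variance, which treats the estimated parameters as if they were true.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        rng = np.random.default_rng(2)
        X = rng.uniform(0.0, 1.0, size=(12, 1))
        y = np.sin(6 * X).ravel()
        x0 = np.array([[0.5]])

        gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3))
        gp.fit(X, y)
        _, plugin_std = gp.predict(x0, return_std=True)

        # Parametric bootstrap: sample design responses and the true value at
        # x0 jointly from the prior with the fitted kernel, refit (including
        # hyperparameter re-estimation), and record the squared error at x0.
        Xa = np.vstack([X, x0])
        K = gp.kernel_(Xa) + 1e-10 * np.eye(len(Xa))
        sq_errors = []
        for _ in range(100):
            y_star = rng.multivariate_normal(np.zeros(len(Xa)), K)
            gp_b = GaussianProcessRegressor(kernel=RBF(length_scale=0.3)).fit(X, y_star[:-1])
            sq_errors.append((gp_b.predict(x0)[0] - y_star[-1]) ** 2)

        print("plug-in Kriging variance:  ", plugin_std[0] ** 2)
        print("bootstrap Kriging variance:", np.mean(sq_errors))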

    Multivariate Convex Approximation and Least-Norm Convex Data-Smoothing

    The main contribution of this paper is twofold. First, we present a method to approximate multivariate convex functions by piecewise linear upper and lower bounds. We consider a method that is based on function evaluations only. However, to use this method, the data have to be convex. Unfortunately, even if the underlying function is convex, this is not always the case due to (numerical) errors. Therefore, second, we present a multivariate data-smoothing method that smooths nonconvex data. We consider both the case in which we have only function evaluations and the case in which we also have derivative information. Furthermore, we show that our methods are polynomial-time methods. We illustrate this methodology by applying it to some examples.

    Keywords: approximation theory; convexity; data-smoothing
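
    As a hedged sketch of least-norm convex data-smoothing (a univariate special case using cvxpy; the paper treats the multivariate setting): find the values closest to the data in least-squares norm whose second differences on an equally spaced grid are nonnegative, i.e. the nearest convex data set.

        import cvxpy as cp
        import numpy as np

        rng = np.random.default_rng(3)
        x = np.linspace(-1.0, 1.0, 20)
        y = x**2 + rng.normal(scale=0.05, size=x.size)  # convex function + noise

        z = cp.Variable(x.size)
        # Nonnegative second differences enforce convexity of the smoothed
        # data on the (here equally spaced) grid.
        convexity = [z[i - 1] - 2 * z[i] + z[i + 1] >= 0 for i in range(1, x.size - 1)]
        prob = cp.Problem(cp.Minimize(cp.sum_squares(z - y)), convexity)
        prob.solve()

        print("distance to nearest convex data:", np.linalg.norm(z.value - y))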